
    Small ball probability for the condition number of random matrices

    Let $A$ be an $n\times n$ random matrix with i.i.d. entries of zero mean, unit variance and a bounded subgaussian moment. We show that the condition number $s_{\max}(A)/s_{\min}(A)$ satisfies the small ball probability estimate $\mathbb{P}\{s_{\max}(A)/s_{\min}(A)\leq n/t\}\leq 2\exp(-ct^2)$ for $t\geq 1$, where $c>0$ may depend only on the subgaussian moment. Although the estimate can be obtained as a combination of known results and techniques, it was not noticed in the literature before. As a key step of the proof, we apply estimates for the singular values of $A$, $\mathbb{P}\{s_{n-k+1}(A)\leq ck/\sqrt{n}\}\leq 2\exp(-ck^2)$ for $1\leq k\leq n$, obtained (under some additional assumptions) by Nguyen.
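    A minimal numerical sketch (not from the paper, assuming NumPy): for i.i.d. standard Gaussian entries, one subgaussian case covered by the theorem, the condition number typically grows linearly in $n$, which is the behavior the small ball bound quantifies. Matrix sizes and sample counts are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Empirical check: the bound P{s_max(A)/s_min(A) <= n/t} <= 2 exp(-c t^2)
# says the condition number is rarely much smaller than n; so the ratio
# cond(A)/n should stay of constant order as n grows.
for n in (100, 200, 400):
    conds = [np.linalg.cond(rng.standard_normal((n, n))) for _ in range(20)]
    print(n, np.median(conds) / n)
```

    The printed ratios fluctuate but do not decay with $n$, consistent with the estimate.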

    Convex recovery of a structured signal from independent random linear measurements

    This chapter develops a theoretical analysis of the convex programming method for recovering a structured signal from independent random linear measurements. The technique delivers bounds on the sampling complexity that are similar to recent results for standard Gaussian measurements, but the argument applies to a much wider class of measurement ensembles. To demonstrate the power of this approach, the paper presents a short analysis of phase retrieval by trace-norm minimization. The key technical tool is a framework, due to Mendelson and coauthors, for bounding a nonnegative empirical process. (18 pages, 1 figure. To appear in "Sampling Theory, a Renaissance.")

    Sparsity and Incoherence in Compressive Sampling

    We consider the problem of reconstructing a sparse signal $x^0\in\mathbb{R}^n$ from a limited number of linear measurements. Given $m$ randomly selected samples of $Ux^0$, where $U$ is an orthonormal matrix, we show that $\ell_1$ minimization recovers $x^0$ exactly when the number of measurements exceeds $m\geq \mathrm{Const}\cdot\mu^2(U)\cdot S\cdot\log n$, where $S$ is the number of nonzero components in $x^0$, and $\mu(U)$ is the largest entry in $U$, properly normalized: $\mu(U)=\sqrt{n}\cdot\max_{k,j}|U_{k,j}|$. The smaller $\mu(U)$, the fewer samples are needed. The result holds for "most" sparse signals $x^0$ supported on a fixed (but arbitrary) set $T$. Given $T$, if the signs of the nonzero entries of $x^0$ on $T$ and the observed values of $Ux^0$ are drawn at random, the signal is recovered with overwhelming probability. Moreover, this is nearly optimal in the sense that any method succeeding with the same probability would require just about this many samples.
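    A minimal sketch of the recovery procedure (not from the paper, assuming NumPy/SciPy): sample $m$ rows of an orthonormal $U$, observe $b = U_{\Omega} x^0$, and solve the $\ell_1$ minimization as a linear program via the standard split $x = u - v$ with $u, v \geq 0$. The choice of $U$ (QR of a Gaussian matrix), the dimensions, and the random-sign signal are illustrative; the paper's bound depends on $\mu(U)$, which this choice keeps moderate.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, S, m = 64, 3, 40

# Orthonormal U via QR of a Gaussian matrix (illustrative choice;
# mu(U) = sqrt(n) * max |U_kj| controls how few samples suffice).
U, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Sparse signal with random signs on a random support T.
x0 = np.zeros(n)
support = rng.choice(n, S, replace=False)
x0[support] = rng.choice([-1.0, 1.0], S)

# Observe m randomly selected samples of U x0.
rows = rng.choice(n, m, replace=False)
A, b = U[rows], U[rows] @ x0

# l1 minimization as an LP: minimize sum(u + v) s.t. A(u - v) = b, u, v >= 0.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print(np.max(np.abs(x_hat - x0)))
```

    With $m$ well above $\mu^2(U)\, S \log n$, as here, the LP returns $x^0$ up to solver tolerance.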

    Estimation in high dimensions: a geometric perspective

    This tutorial provides an exposition of a flexible geometric framework for high-dimensional estimation problems with constraints. The tutorial develops geometric intuition about high-dimensional sets, justifies it with results from asymptotic convex geometry, and demonstrates connections between geometric results and estimation problems. The theory is illustrated with applications to sparse recovery, matrix completion, quantization, linear and logistic regression, and generalized linear models. (56 pages, 9 figures.)